Wearable system


When neural implant meets multimodal LLM: A dual-loop system for neuromodulation and naturalistic neural-behavioral research

Wang, Edward Hong, Wen, Cynthia Xin

arXiv.org Artificial Intelligence

We propose a novel dual-loop system that synergistically combines responsive neurostimulation (RNS) implants with artificial intelligence-driven wearable devices for treating post-traumatic stress disorder (PTSD) and enabling naturalistic brain research. In PTSD Therapy Mode, an implanted closed-loop neural device monitors amygdala activity and provides on-demand stimulation upon detecting pathological theta oscillations, while an ensemble of wearables (smart glasses, smartwatches, smartphones) uses multimodal large language model (LLM) analysis of sensory data to detect environmental or physiological PTSD triggers and deliver timely audiovisual interventions. Logged events from both the neural and wearable loops are analyzed to personalize trigger detection and progressively transition patients to non-invasive interventions. In Neuroscience Research Mode, the same platform is adapted for real-world brain activity capture. Wearable-LLM systems recognize naturalistic events (social interactions, emotional situations, compulsive behaviors, decision making) and signal implanted RNS devices (via wireless triggers) to record synchronized intracranial data during these moments. This approach builds on recent advances in mobile intracranial EEG recording and closed-loop neuromodulation in humans (BRAIN Initiative, 2023; Mobbs et al., 2021). We discuss how our interdisciplinary system could revolutionize PTSD therapy and cognitive neuroscience by enabling 24/7 monitoring, context-aware intervention, and rich data collection outside traditional labs. The vision is a future where AI-enhanced devices continuously collaborate with the human brain, offering therapeutic support and deep insights into neural function, with the resulting real-world, context-rich neural data in turn accelerating the development of more biologically grounded and human-centric AI.
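The abstract describes a handshake between the two loops: the wearable side detects a naturalistic event or trigger, signals the implant, and both loops write to a shared log used for offline personalization. The paper summary gives no implementation details, so the following is a purely illustrative sketch (all names, fields, and classes are hypothetical) of what such a shared event log might look like:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class TriggerEvent:
    """A single trigger, from either the implant loop or the wearable loop."""
    source: str      # "implant" (e.g., pathological theta) or "wearable" (e.g., LLM-detected context)
    label: str       # event description, e.g. "pathological_theta" or "crowd_noise"
    timestamp: str   # UTC ISO-8601, so both loops can be aligned offline


def make_trigger(source: str, label: str) -> TriggerEvent:
    """Stamp a trigger with the current UTC time."""
    return TriggerEvent(source, label, datetime.now(timezone.utc).isoformat())


class EventLog:
    """Both loops append here; the log is later analyzed to personalize trigger detection."""
    def __init__(self):
        self.events = []

    def record(self, event: TriggerEvent):
        self.events.append(event)
```

In this sketch, a wearable-side detection would call `make_trigger("wearable", ...)` and both signal the implant to record and append the event to the log.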


MetaWearS: A Shortcut in Wearable Systems Lifecycle with Only a Few Shots

Amirshahi, Alireza, Toosi, Maedeh H., Mohammadi, Siamak, Albini, Stefano, Schiavone, Pasquale Davide, Ansaloni, Giovanni, Aminifar, Amir, Atienza, David

arXiv.org Artificial Intelligence

Wearable systems provide continuous health monitoring and can lead to early detection of potential health issues. However, the lifecycle of wearable systems faces several challenges. First, effective model training for a new wearable device requires substantial labeled data from various subjects, collected directly by the wearable. Second, subsequent model updates require further extensive labeled data for retraining. Finally, frequent model updates on the wearable device can shorten battery life during long-term monitoring. To address these challenges, we propose MetaWearS, a meta-learning method that reduces the amount of initial data collection required. Our approach also incorporates a prototypical updating mechanism, simplifying the update process by modifying the class prototype rather than retraining the entire model. We evaluate MetaWearS in two case studies: the detection of epileptic seizures and the detection of atrial fibrillation (AF). We show that by fine-tuning with just a few samples, we achieve 70% and 82% AUC for epileptic seizure and AF detection, respectively, outperforming a conventional approach by up to 45% AUC. Furthermore, updating the model with only 16 minutes of additional labeled data increases the AUC by up to 5.3%. Finally, MetaWearS reduces the energy consumption of model updates by 456x and 418x for epileptic seizure and AF detection, respectively.
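The prototypical updating idea, adjusting a class prototype (the mean embedding of its labeled examples) instead of retraining the whole network, can be illustrated with a minimal sketch. The encoder, distance metric, and update rule used by MetaWearS are not specified in this summary, so all names here are hypothetical:

```python
import numpy as np


def update_prototype(old_proto, new_embeddings, n_old):
    """Incrementally fold a few newly labeled sample embeddings into a class
    prototype (a running mean), without retraining the encoder itself."""
    n_new = len(new_embeddings)
    return (old_proto * n_old + np.sum(new_embeddings, axis=0)) / (n_old + n_new)


def classify(embedding, prototypes):
    """Assign the class whose prototype is nearest in embedding space."""
    dists = {c: np.linalg.norm(embedding - p) for c, p in prototypes.items()}
    return min(dists, key=dists.get)
```

Because an update only recomputes one mean vector rather than running backpropagation, it is plausible that this is where the reported orders-of-magnitude energy savings for model updates come from.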


Many-to-One Knowledge Distillation of Real-Time Epileptic Seizure Detection for Low-Power Wearable Internet of Things Systems

Baghersalimi, Saleh, Amirshahi, Alireza, Forooghifar, Farnaz, Teijeiro, Tomas, Aminifar, Amir, Atienza, David

arXiv.org Artificial Intelligence

Integrating low-power wearable Internet of Things (IoT) systems into routine health monitoring is an ongoing challenge. Recent advances in the computation capabilities of wearables make it possible to target complex scenarios by exploiting multiple biosignals and using high-performance algorithms such as Deep Neural Networks (DNNs). There is, however, a trade-off between the performance of the algorithms and the low-power requirements of IoT platforms with limited resources. Moreover, physically larger, multi-biosignal wearables cause significant discomfort to patients. Consequently, reducing both power consumption and discomfort is necessary for patients to use IoT devices continuously in everyday life. To overcome these challenges, in the context of epileptic seizure detection, we propose a many-to-one signals knowledge distillation approach targeting single-biosignal processing in IoT wearable systems. The starting point is a highly accurate multi-biosignal DNN; we then apply our approach to develop a single-biosignal DNN for IoT systems that achieves accuracy comparable to the original multi-biosignal DNN. To assess the practicality of our approach in real-life scenarios, we perform a comprehensive simulation experiment analysis on several state-of-the-art edge computing platforms, such as the Kendryte K210 and Raspberry Pi Zero.
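As a rough illustration of the many-to-one setup, the single-biosignal student can be trained against both the hard labels and the softened outputs of the multi-biosignal teacher. The sketch below is a generic distillation loss in the style of Hinton et al., not the paper's exact formulation; the temperature, weighting, and all names are illustrative assumptions:

```python
import numpy as np


def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend cross-entropy on hard labels with KL divergence to the
    multi-biosignal teacher's softened outputs (scaled by T^2, as is standard)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kd = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)) * T * T
    p = softmax(student_logits)
    ce = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    return alpha * ce + (1 - alpha) * kd
```

The KD term vanishes when the student matches the teacher, so minimizing this loss pulls the single-biosignal student toward the multi-biosignal teacher's decision boundaries.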


PAL: Intelligence Augmentation using Egocentric Visual Context Detection

Khan, Mina, Maes, Pattie

arXiv.org Artificial Intelligence

Egocentric visual context detection can support intelligence augmentation applications. We created PAL, a wearable system for personalized, privacy-preserving egocentric visual context detection. PAL comprises a wearable device with a camera, heart-rate sensor, on-device deep learning, and audio input/output, plus a mobile/web application for personalized context labeling. We used on-device deep learning models for generic object and face detection, low-shot custom face and context recognition (e.g., activities like brushing teeth), and custom context clustering (e.g., indoor locations). The models achieved over 80% accuracy on in-the-wild contexts (~1000 images), and we tested PAL in intelligence augmentation applications such as behavior change. We have made PAL open-source to further support intelligence augmentation using personalized and privacy-preserving egocentric visual contexts.
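For the custom context clustering PAL performs (e.g., grouping image embeddings by indoor location), one simple approach is k-means over embedding vectors. PAL's actual clustering method is not detailed in this summary, so the following is only an illustrative sketch with a deterministic farthest-point initialization:

```python
import numpy as np


def kmeans(embeddings, k, iters=20):
    """Cluster image embeddings into k context groups (e.g., indoor locations)."""
    # Farthest-point initialization: deterministic and well spread out.
    centers = [embeddings[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(embeddings - c, axis=1) for c in centers], axis=0)
        centers.append(embeddings[d.argmax()])
    centers = np.array(centers, dtype=float)
    # Standard Lloyd iterations: assign to nearest center, then recompute means.
    for _ in range(iters):
        d = np.linalg.norm(embeddings[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = embeddings[labels == j].mean(axis=0)
    return labels, centers
```

Running this over on-device embeddings would let unlabeled contexts form groups that the user can later name in the labeling app.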


A wearable system to assist visually impaired people

#artificialintelligence

New technological advances could have important implications for those affected by disabilities, offering valuable assistance throughout their everyday lives. One key example is the guidance that technological tools could provide to the visually impaired (VI): individuals who are either partially or entirely blind. With this in mind, researchers at CloudMinds Technologies Inc., in China, have recently created a new deep learning-powered wearable assistive system for VI individuals. This system, presented in a paper pre-published on arXiv, consists of a wearable terminal, a powerful processor and a smartphone. The wearable terminal has two key components: an RGBD camera and an earphone.


How does it make you feel? Wearable system predicts wearer's mood

#artificialintelligence

By predicting moods, AI like this can help ease social interactions for people who find them difficult. Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Institute of Medical Engineering and Science (IMES) have developed a wearable they say can predict the mood of its wearer by analyzing speech patterns and physiological signs. The system may someday serve as a social coach for people with anxiety or Asperger's syndrome. "Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious," Tuka Alhanai, a CSAIL graduate student who worked on the project, said in a statement. "Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket."


Google 'bans' facial recognition on Google Glass - but developers persist

AITopics Original Links

Google will not allow apps that implement facial recognition on its Google Glass product, the company says, citing privacy concerns, after an American company said it would offer a commercial service to recognise celebrities and others. Developers have pointed out, though, that it is possible to load apps - which Google calls "Glassware" - onto the wearable system without needing Google's permission. Those could then communicate with any of a growing number of services which say they can connect a name with a face once given a photo. Equally, users could simply upload still pictures to other online services which would provide the facial recognition service. "A 'ban' is purely symbolic," commented Martin Macdonald, a marketing director for Expedia EAN who has tried Google Glass.